Nvidia's GeForce GTS 250 graphics card


The history of Nvidia’s G92 graphics processor is a long one, as these things go. The first graphics card based on it was the GeForce 8800 GT, which debuted in October of 2007. The 8800 GT was a stripped-down version of the G92 with a few bits and pieces disabled. The fuller implementation of G92 came in December ’07 in the form of the GeForce 8800 GTS 512MB. This card initiated the G92’s long history of brand confusion by overlapping with existing 320MB and 640MB versions of the GeForce 8800 GTS, which were based on an entirely different chip, the much larger (and older) G80. Those cards had arrived on the scene way back in November of 2006.

As the winter of ’07 began to fade into spring, Nvidia had a change of heart and suddenly started renaming the later members of the GeForce 8 series as “new” 9-series cards. Thus the GeForce 8800 GTS 512 became the 9800 GTX. And thus things remained for nearly ten weeks.

Then, in response to the introduction of strong new competition, Nvidia shipped a new version of the G92 GPU with the same basic architecture but manufactured on a smaller 55nm fabrication process. This chip found its way to market aboard a slightly revised graphics card dubbed the GeForce 9800 GTX+. The base clock speeds on the GTX+ matched those of some “overclocked in the box” GeForce 9800 GTX cards, and the performance of the two was essentially identical, though the GTX+ did reduce power consumption by a handful of watts. Slowly, the GTX+ began replacing the 9800 GTX in the market, as the buying public scratched its collective head over the significance of that plus symbol.

EVGA’s GeForce GTS 250 Superclocked

Which brings us to today and the introduction of yet another graphics card based on the G92 GPU, the GeForce GTS 250. This is probably the card that, by all rights, the 9800 GTX+ should have been, because it consolidates the gains that switching to a 55nm fab process can bring. Although its base clock speeds remain the same as the 9800 GTX+—738MHz for most of the GPU, 1836MHz for the shaders, and 1100MHz (or 2200 MT/s) for the GDDR3 memory—the GeForce GTS 250 is a physically smaller card, at nine inches long rather than 10.5″, and it has but a single six-pin auxiliary power connector onboard.

The reduction in power connectors is made possible by a new board design that cuts power consumption enough to make the second power input superfluous. We should note, though, that Nvidia rates the GTS 250's max board power at 150W, right at the limit of the PCI Express spec for this power plug configuration.
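As a back-of-the-envelope check on that figure (assuming the PCIe spec's usual budgets of 75W from the x16 slot and 75W from each six-pin auxiliary connector), the 150W ceiling falls straight out of the arithmetic:

```python
# Power budget for a card by auxiliary connector count, using the commonly
# cited PCIe limits: 75W from the x16 slot, 75W per six-pin plug.
SLOT_W = 75
SIX_PIN_W = 75

def max_board_power(six_pin_connectors: int) -> int:
    """Total power available to the card, in watts."""
    return SLOT_W + six_pin_connectors * SIX_PIN_W

print(max_board_power(1))  # GTS 250: 150W, right at Nvidia's rating
print(max_board_power(2))  # 9800 GTX: a 225W ceiling it never needed
```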

The GTS 250 is quite a bit shorter than the 9800 GTX

A single power connector will do, thanks

Along with the G92’s umpteenth brand name comes a price cut of sorts: the 512MB version of the GTS 250 will sell for about $130, give or take a penny, well below the price of 9800 GTX+ 512MB cards today. The GTS 250 also offers another possibility in the form of a 1GB variant, which Nvidia and its partners expect to see selling for about $150. That’s quite a nice price in the context of today’s market, where the GTS 250’s most direct competition, the Radeon HD 4850, sells for about $150 in 512MB form. Then again, things change quickly in the world of graphics cards, and Nvidia doesn’t expect GTS 250 cards to be available for purchase until March 10, a whole week from now.

Heck, they may have changed this thing’s name again by then.

There are some benefits to GPU continuity. As you can see in a couple of the pictures above, the GTS 250 retains the dual SLI connectors present on the 9800 GTX, and Nvidia says the GTS 250 will willingly participate in an SLI pairing alongside a GeForce 9800 GTX+ of the same memory size. Unfortunately, though, 512MB and 1GB cards will not match, and Nvidia’s drivers won’t treat a 1GB card as if it were a 512MB card for the sake of multi-GPU cross-compatibility, like AMD’s will.

The card we have in Damage Labs for review is EVGA’s GeForce GTS 250 Superclocked 1GB. Like many GeForce-based graphics cards, this puppy runs at clock speeds higher than Nvidia’s baseline. In this case, we’re looking at a fairly modest boost to a 771MHz core, 1890MHz shaders, and 1123MHz memory. You’ll pay about ten bucks for the additional speed; list price is $159. EVGA also plans to sell a 1GB card with clocks closer to stock speeds for $149. Odds are, neither of those cards will look exactly like the one in the pictures above, which is an early sample. EVGA intends for the final product to have a swanky black PCB and an HDTV out port, which our sample conspicuously lacks.

The Radeon HD 4850 goes to 1GB, too

When we go to review a new graphics card, we tend to look for the closest competition to compare against it. In this case, the most obvious candidate, at least in terms of similar specifications, seemed to be a Radeon HD 4850 card with 1GB of memory onboard. Several board makers now sell 4850 cards with a gig of RAM, and Gigabyte was kind enough to send us an example of theirs.

That handsome cooler is a Zalman VF830, which Zalman bills as a “quiet VGA cooler.” Gigabyte takes advantage of the thermal headroom provided by this dual-slot cooler to squeeze a few more megahertz out of the Radeon HD 4850. The end result is a GPU core clock of 700MHz, up 20MHz from stock, with a bone-stock 993MHz GDDR3 memory clock.

Right now, prevailing prices on this card are running about $189 at online vendors, well above the GeForce GTS 250’s projected price. I wouldn’t be surprised to see AMD and its partners cut prices to match or beat the GTS 250 in the next couple of weeks, but given current going rates, the new GeForce would seem to have a built-in price advantage against the 4850 1GB.

Test notes

You’ll have to forgive us. Since Nvidia sprung this card on us in the middle of last week, and since we rather presumptuously had plans this past weekend, we were not about to go and formulate a revised test suite and produce an all-new set of benchmark results for this card and thirteen or so of its most direct competitors, with all new drivers and new games. Instead, we chose a strategy that very much mirrors Nvidia’s, recycling a past product for a new purpose. In our case, we decided to rely upon our review of the GeForce GTX 285 and 295, published way back on January 15, for most of our test data.

This unflinchingly lame, sad, and altogether too typical exercise in sheer laziness and feckless ridiculosity nets us several wholly insurmountable challenges in our weak attempt at evaluating this new product and its most direct competitor. First and foremost, of course, is the fact that video card drivers have changed one or two entire sub-point-release revisions since our last article. So although we tested the GeForce GTS 250 and Radeon HD 4850 1GB with recent drivers, the remainder of our results come from well-nigh ancient and unquestionably much slower and less capable driver software, because everyone knows that video card performance improves 15-20% with each driver release. Never mind the fact that the data you will see on the following pages will look, on the whole, entirely comparable across driver revisions. That is a sham, a mirage, and our other results are entirely useless even as a point of reference.

As if that outrage weren’t sufficient to get our web site operator’s license revoked, you may be aware that as many as one or two brand-new, triple-A PC game titles have been released since we chose the games in our test suite, and their omission will surely cripple our ability to assess this year-and-a-half-old GPU. This fact is inescapable, and we must be made to suffer for it.

Finally, in a coup de grace fitting of a Tarantino flick, two of the games we used were tested at a screen resolution of 2560×1600, clearly a higher resolution than anyone with a $150 graphics card would ever use for anything. Ever. Do not be swayed by the reasonable-sounding voice in your ear that points out both games were playable at this resolution on this class of hardware. Do not be taken in by the argument that using a very high resolution serves to draw out the differences between 512MB and 1GB graphics cards, and answer not the siren song of the future-proofing appeal. Nothing about this test is in any way “real world,” and no one who considers himself legitimate as a gamer or, nay, a human being should have any part in such a travesty. You may wish to close this tab in your browser now.

Our testing methods

As ever, we did our best to deliver clean benchmark numbers. Tests were run at least three times, and the results were averaged.

Our test systems were configured like so:

Processor: Core i7-965 Extreme 3.2GHz
System bus: QPI 4.8 GT/s (2.4GHz)
Motherboard: Gigabyte EX58-UD5
BIOS revision: F3
North bridge: X58 IOH
South bridge: ICH10R
Chipset drivers: INF update 9.1.0.1007, Matrix Storage Manager 8.6.0.1007
Memory size: 6GB (3 DIMMs)
Memory type: Corsair Dominator TR3X6G1600C8 DDR3 SDRAM at 1333MHz
Memory timings: CAS latency (CL) 8, RAS to CAS delay (tRCD) 8, RAS precharge (tRP) 8, cycle time (tRAS) 24, command rate 2T
Audio: Integrated ICH10R/ALC889A with Realtek 6.0.1.5745 drivers

Graphics:

Asus EAH4850 TOP Radeon HD 4850 512MB PCIe with Catalyst 8.12 (8.561.3-081217a-073402E) drivers

Dual Asus EAH4850 TOP Radeon HD 4850 512MB PCIe with Catalyst 8.12 (8.561.3-081217a-073402E) drivers

Gigabyte Radeon HD 4850 1GB PCIe with Catalyst 9.2 drivers

Visiontek Radeon HD 4870 512MB PCIe with Catalyst 8.12 (8.561.3-081217a-073402E) drivers

Dual Visiontek Radeon HD 4870 512MB PCIe with Catalyst 8.12 (8.561.3-081217a-073402E) drivers

Asus EAH4870 DK 1G Radeon HD 4870 1GB PCIe with Catalyst 8.12 (8.561.3-081217a-073402E) drivers

Asus EAH4870 DK 1G Radeon HD 4870 1GB PCIe + Radeon HD 4870 1GB PCIe with Catalyst 8.12 (8.561.3-081217a-073402E) drivers

Sapphire Radeon HD 4850 X2 2GB PCIe with Catalyst 8.12 (8.561.3-081217a-073402E) drivers

Palit Revolution R700 Radeon HD 4870 X2 2GB PCIe with Catalyst 8.12 (8.561.3-081217a-073402E) drivers

GeForce 9800 GTX+ 512MB PCIe with ForceWare 180.84 drivers

Dual GeForce 9800 GTX+ 512MB PCIe with ForceWare 180.84 drivers

Palit GeForce 9800 GX2 1GB PCIe with ForceWare 180.84 drivers

EVGA GeForce GTS 250 Superclocked 1GB PCIe with ForceWare 182.06 drivers

EVGA GeForce GTX 260 Core 216 896MB PCIe with ForceWare 180.84 drivers

EVGA GeForce GTX 260 Core 216 896MB PCIe + Zotac GeForce GTX 260 (216 SPs) AMP²! Edition 896MB PCIe with ForceWare 180.84 drivers

XFX GeForce GTX 280 1GB PCIe with ForceWare 180.84 drivers

GeForce GTX 285 1GB PCIe with ForceWare 181.20 drivers

Dual GeForce GTX 285 1GB PCIe with ForceWare 181.20 drivers

GeForce GTX 295 1.792GB PCIe with ForceWare 181.20 drivers

Hard drive: WD Caviar SE16 320GB SATA
OS: Windows Vista Ultimate x64 Edition
OS updates: Service Pack 1, DirectX November 2008 update

Thanks to Corsair for providing us with memory for our testing. Their quality, service, and support are easily superior to no-name DIMMs.

Our test systems were powered by PC Power & Cooling Silencer 750W power supply units. The Silencer 750W was a runaway Editor’s Choice winner in our epic 11-way power supply roundup, so it seemed like a fitting choice for our test rigs.

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.

We used the following versions of our test applications:

Call of Duty: World at War 1.1
Crysis Warhead
Dead Space
Fallout 3 1.0.0.15
Far Cry 2
Left 4 Dead
3DMark Vantage 1.0.1
FRAPS 2.9.6

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Specs and synthetics

Before we get to play any games, we should stop and look at the specs of the various cards we’re testing. Incidentally, the numbers in the table below are derived from the observed clock speeds of the cards we’re testing, not the manufacturer’s reference clocks or stated specifications.

                            Peak pixel    Peak bilinear    Peak FP16        Peak memory    Peak shader arithmetic (GFLOPS)
                            fill rate     texel filtering  texel filtering  bandwidth      Single-issue    Dual-issue
                            (Gpixels/s)   (Gtexels/s)      (Gtexels/s)      (GB/s)

GeForce 9500 GT             4.4           8.8              4.4              25.6           90              134
GeForce 9600 GT             11.6          23.2             11.6             62.2           237             355
GeForce 9800 GT             9.6           33.6             16.8             57.6           339             508
GeForce 9800 GTX+           11.8          47.2             23.6             70.4           470             705
GeForce GTS 250             12.3          49.3             24.6             71.9           484             726
GeForce 9800 GX2            19.2          76.8             38.4             128.0          768             1152
GeForce GTX 260 (192 SPs)   16.1          36.9             18.4             111.9          477             715
GeForce GTX 260 (216 SPs)   17.5          45.1             22.5             117.9          583             875
GeForce GTX 280             19.3          48.2             24.1             141.7          622             933
GeForce GTX 285             21.4          53.6             26.8             166.4          744             1116
GeForce GTX 295             32.3          92.2             46.1             223.9          1192            1788
Radeon HD 4650              4.8           19.2             9.6              16.0           384             –
Radeon HD 4670              6.0           24.0             12.0             32.0           480             –
Radeon HD 4830              9.2           18.4             9.2              57.6           736             –
Radeon HD 4850              10.9          27.2             13.6             67.2           1088            –
Radeon HD 4850 1GB          11.2          28.0             14.0             63.6           1120            –
Radeon HD 4870              12.0          30.0             15.0             115.2          1200            –
Radeon HD 4850 X2           20.0          50.0             25.0             127.1          2000            –
Radeon HD 4870 X2           24.0          60.0             30.0             230.4          2400            –

The theoretical numbers in the table give the GeForce GTS 250 a clear advantage in texture filtering rates and memory bandwidth, while the Radeon HD 4850 has an equally sizeable lead in peak shader arithmetic capacity. But look what happens when we run these cards through 3DMark’s synthetic tests.

The 4850 1GB nearly matches the GTS 250 in the color fill test, which tends to be bound primarily by memory bandwidth, and the 4850 comes out on top in the texture fill rate test.

Meanwhile, the GeForce GTS 250 leads the 4850 in half of the shader processing tests, and our expectations are almost fully confounded. In this GPU generation, the theoretical peak capacities of the GPUs take a back seat to the realities of architectural efficiency. Although the G92 has more texture filtering potential and memory bandwidth on paper, the HD 4850 is stronger in practice. And although the 4850’s RV770 GPU has more parallel processing power than the G92, the GeForce tends to use its arithmetic capacity more effectively in many cases.
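For the curious, the theoretical peaks in the table fall straight out of each chip's unit counts and observed clocks. Here is a minimal sketch for the EVGA GTS 250 Superclocked we tested, using the G92's unit counts (16 ROPs, 64 texture units, 128 stream processors, 256-bit memory bus):

```python
# Theoretical peaks for the EVGA GTS 250 Superclocked, from observed clocks:
# 771MHz core, 1890MHz shaders, 1123MHz (2246 MT/s) GDDR3 memory.
rops, tmus, sps, bus_bits = 16, 64, 128, 256
core_ghz, shader_ghz, mem_ghz = 0.771, 1.890, 1.123

pixel_fill = rops * core_ghz               # Gpixels/s
texel_rate = tmus * core_ghz               # bilinear Gtexels/s
fp16_rate = texel_rate / 2                 # FP16 textures filter at half rate
bandwidth = mem_ghz * 2 * bus_bits / 8     # GDDR3 moves two bits per pin per clock
gflops_single = sps * shader_ghz * 2       # one MAD (2 flops) per SP per clock
gflops_dual = sps * shader_ghz * 3         # MAD plus co-issued MUL (3 flops)

print(round(pixel_fill, 1), round(texel_rate, 1), round(fp16_rate, 1),
      round(bandwidth, 1), round(gflops_single), round(gflops_dual))
```

Rounding quibbles aside, these figures reproduce the GTS 250 row in the table above.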

Far Cry 2

We tested Far Cry 2 using the game’s built-in benchmarking tool, which allowed us to test the different cards at multiple resolutions in a precisely repeatable manner. We used the benchmark tool’s “Very high” quality presets with the DirectX 10 renderer and 4X multisampled antialiasing.

Our two main contenders are very closely matched here. The Radeon HD 4850 1GB is faster at mere mortal resolutions, but the GTS 250 produces higher frame rates at four megapixels.

The most notable result here, perhaps, is the strong performance of these two new 1GB cards at 2560×1600, where even CrossFire and SLI configurations involving 512MB cards run out of headroom. Neither new card, in a single-GPU config, is really fast enough to be playable at this res, but the additional video RAM clearly brings an improvement, and these results suggest good things for multi-GPU configs with 1GB. (So do our results from higher-end multi-GPU configs involving 1GB cards, of course.)

Left 4 Dead

We tested Valve’s zombie shooter using a custom-recorded timedemo from the game’s first campaign. We maxed out all of the game’s quality options and used 4X multisampled antialiasing in combination with 16X anisotropic texture filtering.

The scaling theme we established on the previous page continues here: the 4850 is faster at the lowest resolution, and the GTS 250’s relative performance becomes increasingly stronger as the resolution rises. Still, both cards produce nearly 60 FPS at 2560×1600, so playability is never in question.

Call of Duty: World at War

We tested the latest Call of Duty title by playing through the first 60 seconds of the game’s third mission and recording frame rates via FRAPS. Although testing in this manner isn’t precisely repeatable from run to run, we believe averaging the results from five runs is sufficient to get reasonably reliable comparative numbers. With FRAPS, we can also report the lowest frame rate we encountered. Rather than average those, we’ve reported the median of the low scores from the five test runs, to reduce the impact of outliers. The frame-by-frame info for each card was taken from a single, hopefully representative play-testing session.
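That aggregation scheme is easy to sketch; the frame rates below are invented for illustration, not measured:

```python
# Average the per-run mean frame rates, but take the median of the per-run
# minimums so one anomalous hitch can't drag the reported low down.
from statistics import mean, median

avg_fps_per_run = [46.2, 44.8, 47.1, 45.5, 46.0]   # five FRAPS runs (hypothetical)
min_fps_per_run = [24, 9, 26, 23, 25]              # run two caught a one-off hitch

reported_avg = round(mean(avg_fps_per_run), 1)
reported_low = median(min_fps_per_run)             # 24, unmoved by the outlier 9

print(reported_avg, reported_low)  # 45.9 24
```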

Both cards were fast enough to make play-testing, even at this high resolution and quality level, quite feasible. With that said, the low frame rate numbers in the twenties are a bit iffy, as is the feel of the game on these cards at this crazy-insane display resolution. On more reasonable 1920×1200 displays, either card should run this game fine. Just on the numbers, this is technically a win for the GTS 250, but it’s close enough to count as a tie in my book.

Fallout 3

This is another game we tested with FRAPS, this time simply by walking down a road outside of the former Washington, D.C. We used Fallout 3‘s “Ultra high” quality presets, which basically means every slider maxed, along with 4X antialiasing and what the game calls 15X anisotropic filtering.

The GTS 250 and the HD 4850 1GB produce identical median low frame rates of 34 FPS—eminently playable, in my book, at least in this section of the game. You can see how frame rates tend to rise and fall in a saw-tooth pattern as Fallout 3‘s dynamic level-of-detail mechanism does its thing.

Interestingly enough, the 4850 doesn’t seem to benefit much from having 1GB of memory, compared to the 512MB version, but the GTS 250 pretty clearly makes use of the additional video RAM. The 9800 GTX+ 512MB is much slower than the GeForce GTS 250 1GB, even in a dual-card SLI config.

Dead Space

This is a pretty cool game, but it’s something of an iffy console port, and it doesn’t allow the user to turn on multisampled AA or anisotropic filtering. Dead Space also resisted our attempts at enabling those features via the video card control panel. As a result, we simply tested Dead Space at a high resolution with all of its internal quality options enabled. We tested at a spot in Chapter 4 of the game where Isaac takes on a particularly big and nasty, er, bad guy thingy. This fight is set in a large, open room and should tax the GPUs more than most spots in the game.

I’m just guessing, but I think maybe Nvidia’s drivers are a little bit better optimized for this game than AMD’s. Call it a hunch. Then again, even the slowest Radeon here cranks out acceptable frame rates, so it’s hard to get too worked up about it. Chalk up a win for the GTS 250, I suppose, though.

Crysis Warhead

This game is sufficient to tax even the fastest GPUs without using the highest possible resolution or quality setting—or any form of antialiasing. So we tested at 1920×1200 using the “Gamer” quality setting. Of course, the fact that Warhead apparently tends to run out of memory and crash (with most cards) at higher resolutions is a bit of a deterrent, as is the fact that MSAA doesn’t always produce the best results in this game. Regardless, Warhead looks great on a fast video card, with the best explosions in any game yet.

The GTS 250 again has a slight edge over the Radeon HD 4850, which seems to be an emerging pattern. Neither card appears to benefit substantially from having 1GB of memory compared to its 512MB sibling.

Power consumption

We measured total system power consumption at the wall socket using an Extech power analyzer model 380803. The monitor was plugged into a separate outlet, so its power draw was not part of our measurement. The cards were plugged into a motherboard on an open test bench.

The idle measurements were taken at the Windows Vista desktop with the Aero theme enabled. The cards were tested under load running Left 4 Dead at 2560×1600 resolution, using the same settings we did for performance testing.

Recent GeForce cards have had some impressively low power consumption numbers at idle, and the GTS 250 continues that trend in surprising fashion, reducing power draw by 30W compared to the 9800 GTX+. That’s with the exact same 55nm G92 graphics processor and twice the memory capacity of the 9800 GTX+, even. Power draw is also down by 15W under load, and in both scenarios, the GeForce GTS 250 consumes less power than the Radeon HD 4850 1GB.

That said, the reductions in power use aren’t limited to the GeForces. Gigabyte’s Radeon HD 4850 1GB also represents measurable progress versus the 4850 512MB.

Noise levels

We measured noise levels on our test system, sitting on an open test bench, using an Extech model 407738 digital sound level meter. The meter was mounted on a tripod approximately 8″ from the test system at a height even with the top of the video card. We used the OSHA-standard weighting and speed for these measurements.

You can think of these noise level measurements much like our system power consumption tests, because the entire systems’ noise levels were measured. Of course, noise levels will vary greatly in the real world along with the acoustic properties of the PC enclosure used, whether the enclosure provides adequate cooling to avoid a card’s highest fan speeds, placement of the enclosure in the room, and a whole range of other variables. These results should give a reasonably good picture of comparative fan noise, though.

The GTS 250’s noise levels, both when idling and running a game, are some of the best we’ve measured in this round of tests. The new GeForce would look even better, relatively speaking, were it not up against some very quiet but essentially broken Asus custom coolers on the Radeon HD 4850 512MB.

Meanwhile, the strangely high noise levels for the Gigabyte Radeon HD 4850 1GB card, which match at idle and under load, are not a fluke. Although Gigabyte chose a nice, powerful Zalman cooler for this card, they did not see fit to endow this cooler with intelligent fan speed control. Or even kind-of-dumb fan speed control. In fact, there’s no fan speed control at all. When I asked Gigabyte why, the answer was: because this is an overclocked card. I wasn’t aware that eking out an additional 20MHz required the total destruction of a product’s acoustic profile, but that’s what’s happened here. And it’s a real shame. A real, puzzling shame.

GPU temperatures

I used GPU-Z to log temperatures during our load testing. In the case of multi-GPU setups, I recorded temperatures on the primary card.

With that fan spinning at 100% no matter what, the 4850 1GB certainly has some nice, low GPU temperatures. Meanwhile, the GTS 250 is easily quieter, but still keeps its temperatures well in check.

Conclusions

At this point in the review, Nvidia’s marketing department would no doubt like for me to say a few words about some of its key points of emphasis of late, such as PhysX, CUDA, and GeForce 3D Vision. I will say a few words, but perhaps not the words that they might wish.

CUDA is Nvidia’s umbrella term for accelerating non-graphics applications on the GPU, about which we’ve heard much lately. ATI Stream is AMD’s term for the same thing, and although we’ve heard less about it, it is very similar in nature and capability, as are the underlying graphics chips. In both cases, the first consumer-level applications are only beginning to arrive, and they’re mostly video encoders that face some daunting file format limitations. Both efforts show some promise, but I expect that if they are to succeed, they must succeed together by running the same programs via a common programming interface. In other words, I wouldn’t buy one brand of GPU over the other expecting big advantages in the realm of GPU-compute capability—especially with a GPU as old as the G92 in the mix.

One exception to this rule may be PhysX, which is wholly owned by Nvidia and supported in games like Mirror’s Edge and… well, let me get back to you on that. I suspect PhysX might offer Nvidia something of an incremental visual or performance advantage in certain upcoming games, just as DirectX 10.1 might for AMD in certain others.

As for GeForce 3D Vision, the GeForce GTS 250 is purportedly compatible with it, but based on my experience, I would strongly recommend getting a much more powerful graphics card (or two) for use with this stereoscopic display scheme. The performance hit would easily swallow up all the GTS 250 has to give—and then some.

The cold reality here is that, for most intents and purposes, current GeForces and Radeons are more or less functionally equivalent, with very similar image quality and capability, in spite of their sheer complexity and rich feature sets. I would gladly trade any of Nvidia’s so-called “graphics plus” features for a substantial edge in graphics image quality or performance. The GTS 250 comes perilously close to losing out on this front due to the Radeon HD 4850’s superior performance with 8X multisampled antialiasing. The saving grace here is my affection for Nvidia’s coverage sampled AA modes, which offer similar image quality and performance.

All of which leads us to the inevitable price and performance comparison, and here, the GeForce GTS 250 offers a reasonably compelling proposition. This card replicates the functionality of the GeForce 9800 GTX+ in a smaller physical size, with substantially less power draw, at a lower cost. I like the move to 1GB of RAM, if only for the sake of future-proofing and keeping the door open to an SLI upgrade that scales well. In the majority of our tests, the GeForce GTS 250 proved faster than the Radeon HD 4850 1GB, if only slightly so. If the Radeon HD 4850 1GB were to stay at current prices, the GTS 250 would be the clear value leader in this segment.

That’s apparently not going to happen, though. At the eleventh hour before publication of this review, AMD informed us of its intention to drop prices on several Radeon HD 4800 series graphics cards that compete in this general price range, mostly through a series of mail-in rebates. Some examples: this MSI 4850 512MB card starts at $159.99 and drops to $124.99 net via a mail-in rebate, and more intriguingly, this PowerColor 4870 512MB lists for $169.99 and has a $15 rebate attached, taking it to net price parity with the EVGA GTS 250 Superclocked card we tested. We hate mail-in rebates with a passion that burns eternal, but if these rebates were to last in perpetuity, the GeForce GTS 250 1GB at $149 nevertheless would be doomed.

For its part, Nvidia has indicated to us its resolve to be the price-performance leader in this portion of the market, so it may make an additional price move if necessary to defend its turf. Nvidia has limitations here that AMD doesn’t face, though, mainly because it doesn’t have the option of simply switching to GDDR5 memory to get twice the bandwidth. That is, after all, the only major difference between the Radeon HD 4850 and 4870. On the merits of its current GPU technology, AMD would seem to have the stronger hand.
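The bandwidth point is easy to check: both the 4850 and 4870 use a 256-bit bus, but GDDR5 transfers four bits per pin per clock where GDDR3 manages two. Using the Gigabyte 4850's 993MHz memory clock and the 4870's 900MHz GDDR5 clock:

```python
# Peak memory bandwidth in GB/s: clock (GHz) x bits per pin per clock x bus width / 8.
bus_bits = 256
hd4850_gbps = 0.993 * 2 * bus_bits / 8   # GDDR3, double data rate
hd4870_gbps = 0.900 * 4 * bus_bits / 8   # GDDR5, effectively quad-pumped

print(round(hd4850_gbps, 1), round(hd4870_gbps, 1))  # 63.6 115.2
```

Nearly double the bandwidth from the same bus width, which is why AMD can scale this design up without touching the memory interface.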

What happens next is anybody’s guess. Just so long as Nvidia doesn’t rename this thing, I’ll be happy. There are GPU bargains ahead.


